particle swarm optimization
In computer science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. PSO optimizes a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but it is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
PSO was originally attributed to Kennedy, Eberhart and Shi, and was first intended for simulating social behaviour, as a stylized representation of the movement of organisms in a bird flock or fish school. The algorithm was later simplified, and it was observed to perform optimization. The book by Kennedy and Eberhart describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications has been made by Poli.
PSO is a metaheuristic, as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as PSO do not guarantee that an optimal solution is ever found. More specifically, PSO does not use the gradient of the problem being optimized, which means PSO does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. PSO can therefore also be used on optimization problems that are partially irregular, noisy, or that change over time.
== Algorithm ==

A basic variant of the PSO algorithm works by having a population (called a swarm) of candidate solutions (called particles). These particles are moved around in the search-space according to a few simple formulae. The movements of the particles are guided by their own best known position in the search-space as well as the entire swarm's best known position. When improved positions are discovered, these then come to guide the movements of the swarm. The process is repeated, and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.
Formally, let ''f'': ℝ''n'' → ℝ be the cost function which must be minimized. The function takes a candidate solution, in the form of a vector of real numbers, as its argument and produces a real number as output, which indicates the objective function value of the given candidate solution. The gradient of ''f'' is not known. The goal is to find a solution a for which ''f''(a) ≤ ''f''(b) for all b in the search-space, which would mean a is the global minimum. Maximization can be performed by considering the function ''h'' = −''f'' instead.
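As a concrete illustration of such a cost function, here is a minimal Python sketch; the sphere benchmark and the helper below are illustrative choices, not part of the original text. Maximization is handled exactly as described above, by minimizing the negated objective.

import numpy as np

# Sphere benchmark: f(x) = sum(x_d^2), with global minimum f(0, ..., 0) = 0.
def sphere(x):
    return float(np.sum(np.asarray(x) ** 2))

# To maximize an objective f instead, minimize h = -f (hypothetical helper).
def negated(f):
    return lambda x: -f(x)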
Let ''S'' be the number of particles in the swarm, each having a position xi ∈ ℝ''n'' in the search-space and a velocity vi ∈ ℝ''n''. Let pi be the best known position of particle ''i'' and let g be the best known position of the entire swarm. A basic PSO algorithm is then:

for each particle ''i'' = 1, ..., ''S'' do
   Initialize the particle's position with a uniformly distributed random vector: xi ~ ''U''(blo, bup), where blo and bup are the lower and upper boundaries of the search-space
   Initialize the particle's best known position to its initial position: pi ← xi
   if ''f''(pi) < ''f''(g) then
      update the swarm's best known position: g ← pi
   Initialize the particle's velocity: vi ~ ''U''(-|bup-blo|, |bup-blo|)
until a termination criterion is met (e.g. number of iterations performed, or a solution with adequate objective function value is found), repeat:
   for each particle ''i'' = 1, ..., ''S'' do
      for each dimension ''d'' = 1, ..., ''n'' do
         Pick random numbers: ''r''p, ''r''g ~ ''U''(0,1)
         Update the particle's velocity: vi,d ← ω vi,d + φp ''r''p (pi,d - xi,d) + φg ''r''g (gd - xi,d)
      Update the particle's position: xi ← xi + vi
      if ''f''(xi) < ''f''(pi) then
         Update the particle's best known position: pi ← xi
         if ''f''(pi) < ''f''(g) then
            update the swarm's best known position: g ← pi
Now g holds the best found solution.
The parameters ω, φp, and φg are selected by the practitioner and control the behaviour and efficacy of the PSO method.
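For concreteness, the following is a minimal sketch of the basic variant above in Python with NumPy, reusing the sphere objective from the earlier example. The parameter values ω = 0.7, φp = φg = 1.5, the swarm size, and the iteration count are illustrative choices, not values prescribed by the source.

import numpy as np

def sphere(x):
    # Benchmark objective from the earlier sketch; global minimum at the origin.
    return float(np.sum(x ** 2))

def pso(f, b_lo, b_up, n_particles=30, n_dims=2, iters=200,
        omega=0.7, phi_p=1.5, phi_g=1.5, seed=0):
    # Basic PSO variant as in the pseudocode above; parameter values are illustrative.
    rng = np.random.default_rng(seed)
    span = abs(b_up - b_lo)

    # Initialize positions xi ~ U(b_lo, b_up) and velocities vi ~ U(-|bup-blo|, |bup-blo|).
    x = rng.uniform(b_lo, b_up, size=(n_particles, n_dims))
    v = rng.uniform(-span, span, size=(n_particles, n_dims))

    # Best known position of each particle (p) and of the entire swarm (g).
    p = x.copy()
    p_val = np.array([f(xi) for xi in x])
    g = p[np.argmin(p_val)].copy()
    g_val = float(p_val.min())

    for _ in range(iters):
        # Fresh random coefficients rp, rg ~ U(0, 1) per particle and dimension.
        r_p = rng.random((n_particles, n_dims))
        r_g = rng.random((n_particles, n_dims))

        # Velocity and position updates from the pseudocode above.
        v = omega * v + phi_p * r_p * (p - x) + phi_g * r_g * (g - x)
        x = x + v

        # Update personal bests where improved, then the swarm's best.
        vals = np.array([f(xi) for xi in x])
        improved = vals < p_val
        p[improved] = x[improved]
        p_val[improved] = vals[improved]
        if p_val.min() < g_val:
            g_val = float(p_val.min())
            g = p[np.argmin(p_val)].copy()

    return g, g_val

# Usage: minimize the sphere function over the box [-5, 5]^2.
best_x, best_f = pso(sphere, b_lo=-5.0, b_up=5.0)
print(best_x, best_f)

With these illustrative settings the swarm typically converges close to the origin on the sphere function, though, as noted above, no optimum is guaranteed.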
